Results 1 - 8 of 8
1.
Radiol Artif Intell ; 4(3): e210206, 2022 May.
Article in English | MEDLINE | ID: mdl-35652119

ABSTRACT

Femoral component subsidence following total hip arthroplasty (THA) is a worrisome radiographic finding. This study developed and evaluated a deep learning tool to automatically quantify femoral component subsidence between two serial anteroposterior (AP) hip radiographs. The authors' institutional arthroplasty registry was used to retrospectively identify patients who underwent primary THA from 2000 to 2020. A deep learning dynamic U-Net model was trained to automatically segment the femur, implant, and magnification markers on a dataset of 500 randomly selected AP hip radiographs from 386 patients with polished tapered cemented femoral stems. An image processing algorithm was then developed to measure subsidence by automatically annotating reference points on the femur and implant and calibrating the measurement with respect to the magnification markers. Algorithmic subsidence measurements were compared with manual measurements by two independent orthopedic surgeon reviewers in 135 randomly selected patients. The mean, median, and SD of the discrepancy between automatic and manual measurements were 0.6, 0.3, and 0.7 mm, respectively, with no systematic bias between human and machine. Automatic and manual measurements were strongly correlated and showed no evidence of significant differences. In contrast to the manual approach, the deep learning tool needs no user input to perform subsidence measurements. Keywords: Total Hip Arthroplasty, Femoral Component Subsidence, Artificial Intelligence, Deep Learning, Semantic Segmentation, Hip, Joints. Supplemental material is available for this article. © RSNA, 2022.
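The calibration step described above can be sketched as follows. The marker diameter, pixel coordinates, and reference points below are hypothetical illustrations, not values from the study:

```python
def mm_per_pixel(marker_px_diameter: float, marker_mm_diameter: float = 25.4) -> float:
    """Scale factor (mm/px) from a calibration marker of known physical size."""
    return marker_mm_diameter / marker_px_diameter

def reference_distance_mm(femur_ref_y: float, implant_ref_y: float, scale: float) -> float:
    """Vertical femur-to-implant reference distance, calibrated to millimetres."""
    return abs(implant_ref_y - femur_ref_y) * scale

# Hypothetical y-coordinates (pixels) of reference points on two serial
# radiographs, each calibrated with its own marker-derived scale
d_baseline = reference_distance_mm(520.0, 610.0, mm_per_pixel(100.0))
d_followup = reference_distance_mm(515.0, 618.0, mm_per_pixel(98.0))
subsidence = d_followup - d_baseline  # positive value = stem has subsided
```

Calibrating each radiograph separately is what lets two serial images with different magnification be compared in millimetres.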

2.
PLoS One ; 17(5): e0268829, 2022.
Article in English | MEDLINE | ID: mdl-35604891

ABSTRACT

PURPOSE: To compare the interobserver variability of apparent diffusion coefficient (ADC) values of prostate lesions measured by 2D region of interest (ROI) with and without specific measurement instruction. METHODS: Forty lesions in 40 patients who underwent prostate MRI followed by targeted prostate biopsy were evaluated. A multi-reader study (10 readers) was performed to assess the agreement of ADC values between a 2D-ROI placed without specific instruction and a 2D-ROI placed with specific instruction to use a 9-pixel ROI covering the lowest ADC area. A computer script generated multiple overlapping 9-pixel 2D-ROIs within a 3D-ROI encompassing the entire lesion placed by a single reader; the lowest mean ADC value among these small ROIs served as the reference value. Interobserver agreement was assessed using Bland-Altman plots. The intraclass correlation coefficient (ICC) was assessed between ADC values measured by the 10 readers and the computer-calculated reference values. RESULTS: Ten lesions were benign, 6 were Gleason score 6 prostate carcinoma (PCa), and 24 were clinically significant PCa. The mean ± SD reference ADC value by 9-pixel ROI was 733 ± 186 (×10⁻⁶ mm²/s). The 95% limits of agreement of ADC values among readers were narrower with specific instruction (±112) than without (±205). The ICC between reader-measured ADC values and computer-calculated reference values ranged from 0.736 to 0.949 with specific instruction and from 0.349 to 0.919 without. CONCLUSION: Interobserver agreement of ADC values can be improved by specifying a measurement method (use of a specific ROI size covering the lowest ADC area).
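The reference-value search described above — the lowest-mean small ROI among all overlapping placements — can be sketched with a sliding window. The 3×3 window shape (9 pixels) and the toy ADC values are assumptions for illustration; the study's script operated within a reader-placed 3D-ROI:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def lowest_mean_roi(adc_map: np.ndarray, size: int = 3) -> float:
    """Mean ADC of the size x size window with the lowest mean,
    emulating a 9-pixel ROI placed over the lowest-ADC area."""
    windows = sliding_window_view(adc_map, (size, size))
    means = windows.mean(axis=(-2, -1))
    return float(means.min())

# Toy ADC map (units of 1e-6 mm^2/s) with a low-ADC focus in the centre
adc = np.array([
    [900, 880, 870, 910],
    [860, 700, 690, 880],
    [850, 710, 680, 890],
    [900, 870, 860, 920],
], dtype=float)
lowest = lowest_mean_roi(adc)
```

Exhaustively evaluating every window placement removes the reader's freedom in positioning the ROI, which is exactly the source of variability the instruction was meant to reduce.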


Subjects
Diffusion Magnetic Resonance Imaging, Prostate, Diffusion Magnetic Resonance Imaging/methods, Humans, Magnetic Resonance Imaging, Male, Observer Variation, Prostate/diagnostic imaging, Reproducibility of Results, Retrospective Studies
4.
J Digit Imaging ; 34(5): 1183-1189, 2021 10.
Article in English | MEDLINE | ID: mdl-34047906

ABSTRACT

Imaging-based measurements form the basis of surgical decision making in patients with aortic aneurysm. Unfortunately, manual measurements suffer from suboptimal temporal reproducibility, which can lead to delayed or unnecessary intervention. We tested the hypothesis that deep learning could improve upon the temporal reproducibility of CT angiography-derived thoracic aortic measurements in the setting of imperfect ground-truth training data. To this end, we trained a standard deep learning segmentation model from which measurements of aortic volume and diameter could be extracted. First, three blinded cardiothoracic radiologists visually confirmed non-inferiority of the deep learning segmentation maps with respect to manual segmentation on a 50-patient hold-out test cohort, demonstrating a slight preference for the deep learning method (p < 1e-5). Next, reproducibility was assessed by evaluating measured change (coefficient of reproducibility and standard deviation) in volume and diameter values extracted from segmentation maps in patients for whom multiple scans were available and whose aortas had been deemed stable over time by visual assessment (n = 57 patients, 206 scans). Deep learning temporal reproducibility was superior for measures of both volume (p < 0.008) and diameter (p < 1e-5), and reproducibility metrics compared favorably with previously reported values of manual inter-rater variability. Our work motivates future efforts to apply deep learning to aortic evaluation.
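A minimal sketch of the Bland-Altman coefficient of reproducibility used for the measured-change comparison, assuming the conventional definition of 1.96 times the SD of paired differences; the paired diameter values below are hypothetical:

```python
import numpy as np

def coefficient_of_reproducibility(first, second) -> float:
    """Bland-Altman coefficient of reproducibility: 1.96 x SD of the
    paired differences between repeated measurements of stable anatomy."""
    diffs = np.asarray(second, dtype=float) - np.asarray(first, dtype=float)
    return 1.96 * diffs.std(ddof=1)

# Hypothetical paired aortic diameter measurements (mm) from patients
# whose aortas were deemed stable between scans
scan_a = np.array([41.0, 38.5, 45.2, 50.1, 36.8])
scan_b = np.array([41.6, 38.1, 45.9, 49.8, 37.5])
cr = coefficient_of_reproducibility(scan_a, scan_b)
```

For stable aortas the true change is zero, so a smaller coefficient means the measurement pipeline itself is more reproducible over time.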


Subjects
Deep Learning, Aorta, Humans, Reproducibility of Results
5.
J Arthroplasty ; 36(7): 2510-2517.e6, 2021 07.
Article in English | MEDLINE | ID: mdl-33678445

ABSTRACT

BACKGROUND: Inappropriate acetabular component angular position is believed to increase the risk of hip dislocation after total hip arthroplasty. However, manual measurement of these angles is time consuming and prone to interobserver variability. The purpose of this study was to develop a deep learning tool to automate the measurement of acetabular component angles on postoperative radiographs. METHODS: Two cohorts of 600 anteroposterior (AP) pelvis and 600 cross-table lateral hip postoperative radiographs were used to develop deep learning models to segment the acetabular component and the ischial tuberosities. Cohorts were manually annotated, augmented, and randomly split to train-validation-test data sets on an 8:1:1 basis. Two U-Net convolutional neural network models (one for AP and one for cross-table lateral radiographs) were trained for 50 epochs. Image processing was then deployed to measure the acetabular component angles on the predicted masks for anatomical landmarks. Performance of the tool was tested on 80 AP and 80 cross-table lateral radiographs. RESULTS: The convolutional neural network models achieved a mean Dice similarity coefficient of 0.878 and 0.903 on AP and cross-table lateral test data sets, respectively. The mean difference between human-level and machine-level measurements was 1.35° (σ = 1.07°) and 1.39° (σ = 1.27°) for the inclination and anteversion angles, respectively. Differences of 5° or more between human-level and machine-level measurements were observed in less than 2.5% of cases. CONCLUSION: We developed a highly accurate deep learning tool to automate the measurement of angular position of acetabular components for use in both clinical and research settings. LEVEL OF EVIDENCE: III.
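The angle measurement from predicted masks can be sketched as the angle between two landmark lines, for example the long axis of the acetabular component's elliptical projection and the inter-ischial line for inclination. The landmark coordinates below are hypothetical; a real pipeline would derive them from the segmentation masks:

```python
import numpy as np

def angle_between_deg(p1, p2, q1, q2) -> float:
    """Acute angle (degrees) between line p1-p2 and line q1-q2."""
    v = np.subtract(p2, p1).astype(float)
    w = np.subtract(q2, q1).astype(float)
    cos = abs(np.dot(v, w)) / (np.linalg.norm(v) * np.linalg.norm(w))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical landmarks (pixel coordinates) from predicted masks:
# long axis of the cup's elliptical projection, and the line joining
# the two ischial tuberosities as the pelvic reference
cup_axis = [(310, 410), (430, 330)]
ischial_line = [(250, 600), (650, 600)]
inclination = angle_between_deg(*cup_axis, *ischial_line)
```

Taking the absolute value of the dot product keeps the result in 0-90°, so the answer does not depend on the order in which landmark points are listed.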


Subjects
Hip Arthroplasty, Deep Learning, Hip Prosthesis, Acetabulum/diagnostic imaging, Acetabulum/surgery, Hip Arthroplasty/adverse effects, Hip Prosthesis/adverse effects, Humans, Radiography
6.
Radiology ; 299(2): 313-323, 2021 05.
Article in English | MEDLINE | ID: mdl-33687284

ABSTRACT

Background Missing MRI sequences represent an obstacle in the development and use of deep learning (DL) models that require multiple inputs. Purpose To determine if synthesizing brain MRI scans using generative adversarial networks (GANs) allows for the use of a DL model for brain lesion segmentation that requires T1-weighted images, postcontrast T1-weighted images, fluid-attenuated inversion recovery (FLAIR) images, and T2-weighted images. Materials and Methods In this retrospective study, brain MRI scans obtained between 2011 and 2019 were collected, and scenarios were simulated in which the T1-weighted images and FLAIR images were missing. Two GANs were trained, validated, and tested using 210 glioblastomas (GBMs) (Multimodal Brain Tumor Image Segmentation Benchmark [BRATS] 2017) to generate T1-weighted images from postcontrast T1-weighted images and FLAIR images from T2-weighted images. The quality of the generated images was evaluated with mean squared error (MSE) and the structural similarity index (SSI). The segmentations obtained with the generated scans were compared with those obtained with the original MRI scans using the dice similarity coefficient (DSC). The GANs were validated on sets of GBMs and central nervous system lymphomas from the authors' institution to assess their generalizability. Statistical analysis was performed using the Mann-Whitney, Friedman, and Dunn tests. Results Two hundred ten GBMs from the BRATS data set and 46 GBMs (mean patient age, 58 years ± 11 [standard deviation]; 27 men [59%] and 19 women [41%]) and 21 central nervous system lymphomas (mean patient age, 67 years ± 13; 12 men [57%] and nine women [43%]) from the authors' institution were evaluated. The median MSE for the generated T1-weighted images ranged from 0.005 to 0.013, and the median MSE for the generated FLAIR images ranged from 0.004 to 0.103. 
The median SSI ranged from 0.82 to 0.92 for the generated T1-weighted images and from 0.76 to 0.92 for the generated FLAIR images. The median DSCs for the segmentation of the whole lesion, the FLAIR hyperintensities, and the contrast-enhanced areas using the generated scans were 0.82, 0.71, and 0.92, respectively, when replacing both T1-weighted and FLAIR images; 0.84, 0.74, and 0.97 when replacing only the FLAIR images; and 0.97, 0.95, and 0.92 when replacing only the T1-weighted images. Conclusion Brain MRI scans generated using generative adversarial networks can be used as deep learning model inputs when MRI sequences are missing. © RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Zhong in this issue. An earlier incorrect version of this article appeared online. This article was corrected on April 12, 2021.
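A sketch of the image-quality metrics reported above. Note this uses a simplified global SSI computed over the whole image (the standard formulation averages local windows) and random toy arrays in place of MRI data:

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two intensity-normalised images."""
    return float(np.mean((a - b) ** 2))

def global_ssim(a: np.ndarray, b: np.ndarray, data_range: float = 1.0) -> float:
    """Simplified structural similarity computed over the whole image;
    the usual formulation averages many local windows instead."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(
        ((2 * mu_a * mu_b + c1) * (2 * cov + c2))
        / ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
    )

# Toy stand-ins: a "real" image and a "generated" one with mild noise
rng = np.random.default_rng(0)
real = rng.random((64, 64))
fake = np.clip(real + rng.normal(0.0, 0.05, real.shape), 0.0, 1.0)
```

MSE penalises raw intensity error, while SSI rewards preserved local structure; reporting both, as the study does, guards against a generator that scores well on one but not the other.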


Subjects
Brain Neoplasms/diagnostic imaging, Deep Learning, Glioblastoma/diagnostic imaging, Computer-Assisted Image Processing/methods, Lymphoma/diagnostic imaging, Magnetic Resonance Imaging/methods, Aged, Contrast Media, Female, Humans, Male, Middle Aged, Retrospective Studies
7.
J Arthroplasty ; 36(6): 2197-2203.e3, 2021 06.
Article in English | MEDLINE | ID: mdl-33663890

ABSTRACT

BACKGROUND: Dislocation is a common complication following total hip arthroplasty (THA), and accounts for a high percentage of subsequent revisions. The purpose of this study is to illustrate the potential of a convolutional neural network model to assess the risk of hip dislocation based on postoperative anteroposterior pelvis radiographs. METHODS: We retrospectively evaluated radiographs for a cohort of 13,970 primary THAs with 374 dislocations over 5 years of follow-up. Overall, 1490 radiographs from dislocated and 91,094 from non-dislocated THAs were included in the analysis. A convolutional neural network object detection model (YOLO-V3) was trained to crop the images by centering on the femoral head. A ResNet18 classifier was trained to predict subsequent hip dislocation from the cropped imaging. The ResNet18 classifier was initialized with ImageNet weights and trained using FastAI (V1.0) running on PyTorch. The training was run for 15 epochs using 10-fold cross validation, data oversampling, and augmentation. RESULTS: The hip dislocation classifier achieved the following mean performance (standard deviation): accuracy = 49.5 (4.1%), sensitivity = 89.0 (2.2%), specificity = 48.8 (4.2%), positive predictive value = 3.3 (0.3%), negative predictive value = 99.5 (0.1%), and area under the receiver operating characteristic curve = 76.7 (3.6%). Saliency maps demonstrated that the model placed the greatest emphasis on the femoral head and acetabular component. CONCLUSION: Existing prediction methods fail to identify patients at high risk of dislocation following THA. Our radiographic classifier model has high sensitivity and negative predictive value, and can be combined with clinical risk factor information for rapid assessment of risk for dislocation following THA. The model further suggests radiographic locations which may be important in understanding the etiology of prosthesis dislocation. 
Importantly, our model is an illustration of the potential of automated imaging artificial intelligence models in orthopedics. LEVEL OF EVIDENCE: Level III.
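The reported screening metrics follow directly from confusion-matrix counts. The counts below are hypothetical, chosen only to illustrate the high-sensitivity, low-PPV regime typical of rare outcomes such as post-THA dislocation:

```python
def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Basic screening metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # of actual dislocations, fraction flagged
        "specificity": tn / (tn + fp),   # of non-dislocations, fraction cleared
        "ppv": tp / (tp + fp),           # of flagged cases, fraction that dislocate
        "npv": tn / (tn + fn),           # of cleared cases, fraction that stay stable
    }

# Hypothetical counts for a rare outcome: high sensitivity and NPV
# coexist with a very low PPV because true positives are scarce
m = screening_metrics(tp=89, fp=2600, tn=2500, fn=11)
```

With a rare outcome, even a sensitive model flags far more false positives than true ones, which is why the abstract emphasises sensitivity and NPV rather than PPV.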


Subjects
Hip Arthroplasty, Deep Learning, Hip Dislocation, Hip Prosthesis, Hip Arthroplasty/adverse effects, Artificial Intelligence, Hip Dislocation/diagnostic imaging, Hip Dislocation/epidemiology, Hip Prosthesis/adverse effects, Humans, Retrospective Studies, Risk Factors
8.
Radiol Artif Intell ; 2(5): e190183, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33937839

ABSTRACT

PURPOSE: To develop a deep learning model that segments intracranial structures on head CT scans. MATERIALS AND METHODS: In this retrospective study, a primary dataset containing 62 normal noncontrast head CT scans from 62 patients (mean age, 73 years; age range, 27-95 years) acquired between August and December 2018 was used for model development. Eleven intracranial structures were manually annotated on the axial oblique series. The dataset was split into 40 scans for training, 10 for validation, and 12 for testing. After initial training, eight model configurations were evaluated on the validation dataset and the highest performing model was evaluated on the test dataset. Interobserver variability was reported using multirater consensus labels obtained from the test dataset. To ensure that the model learned generalizable features, it was further evaluated on two secondary datasets containing 12 volumes with idiopathic normal pressure hydrocephalus (iNPH) and 30 normal volumes from a publicly available source. Statistical significance was determined using categorical linear regression with P < .05. RESULTS: Overall Dice coefficient on the primary test dataset was 0.84 ± 0.05 (standard deviation). Performance ranged from 0.96 ± 0.01 (brainstem and cerebrum) to 0.74 ± 0.06 (internal capsule). Dice coefficients were comparable to expert annotations and exceeded those of existing segmentation methods. The model remained robust on external CT scans and scans demonstrating ventricular enlargement. The use of within-network normalization and class weighting facilitated learning of underrepresented classes. CONCLUSION: Automated segmentation of CT neuroanatomy is feasible with a high degree of accuracy. The model generalized to external CT scans as well as scans demonstrating iNPH. Supplemental material is available for this article. © RSNA, 2020.
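The per-structure Dice coefficients reported above can be computed from two label maps. The toy 4×4 maps and two-class setup below stand in for the study's 11 annotated structures:

```python
import numpy as np

def per_class_dice(pred: np.ndarray, truth: np.ndarray, n_classes: int) -> list:
    """Dice similarity coefficient for each non-background label
    in a pair of integer label maps."""
    scores = []
    for c in range(1, n_classes + 1):  # label 0 is background
        p, t = pred == c, truth == c
        denom = p.sum() + t.sum()
        scores.append(2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0)
    return scores

# Toy 4x4 label maps with two structures (labels 1 and 2)
truth = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 2, 2],
                  [0, 0, 2, 2]])
pred  = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 2, 2],
                  [0, 2, 2, 2]])
dice_scores = per_class_dice(pred, truth, 2)
```

Scoring each structure separately is what exposes the spread the abstract reports, from 0.96 on large structures down to 0.74 on small ones that an overall average would hide.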
